
    Evidence-Efficient Affinity Propagation Scheme for Virtual Machine Placement in Data Center

    In a cloud data center, without efficient virtual machine placement, an overload of any type of resource on a physical machine (PM) can easily waste other types of resources and cause frequent, costly virtual machine (VM) migrations, which further degrade quality of service (QoS). To address this problem, in this paper we propose an evidence-efficient affinity propagation scheme for VM placement (EEAP-VMP), which is capable of balancing the workload across the various types of resources on the running PMs. Our approach models the search for desirable destination hosts for live VM migration as the propagation of responsibility and availability. The sum of responsibility and availability represents the accumulated evidence for selecting candidate destination hosts for the VMs to be migrated; this evidence is then combined with the presented selection criteria for destination hosts. Extensive experiments are conducted to compare our EEAP-VMP method with previous VM placement methods. The experimental results demonstrate that the EEAP-VMP method is highly effective in reducing VM migrations and the energy consumption of data centers and in balancing the workload of PMs.
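    The abstract does not spell out the message-passing rules, so the following is only a minimal sketch of standard affinity propagation (Frey and Dueck), the technique the scheme builds on. The similarity matrix S, damping factor, iteration count, and the reading of S[i, k] as "how suitable host k is for item i" are illustrative assumptions, not the EEAP-VMP algorithm itself.

```python
import numpy as np

def affinity_propagation(S, damping=0.8, iterations=200):
    """Generic affinity propagation: exchanges responsibility and availability
    messages over a similarity matrix S.  In an EEAP-VMP-like setting, S[i, k]
    would score how suitable host k is as a destination for item i (assumed)."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibility messages
    A = np.zeros((n, n))   # availability messages
    idx = np.arange(n)

    for _ in range(iterations):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        max_idx = AS.argmax(axis=1)
        first_max = AS[idx, max_idx]
        AS[idx, max_idx] = -np.inf
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[idx, max_idx] = S[idx, max_idx] - second_max
        R = damping * R + (1 - damping) * R_new

        # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[idx, idx] = R[idx, idx]
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new[idx, idx].copy()
        A_new = np.minimum(A_new, 0)
        A_new[idx, idx] = diag   # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A = damping * A + (1 - damping) * A_new

    evidence = R + A                   # accumulated evidence, as in the abstract
    exemplars = np.argmax(evidence, axis=1)
    return exemplars, evidence
```

    In this generic form, the row-wise maximum of the accumulated evidence picks an exemplar for each item; EEAP-VMP additionally applies its own destination-host selection criteria on top of this evidence.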

    Characterizations of bilocality and n-locality of correlation tensors

    In the literature, bilocality and n-locality of correlation tensors (CTs) are described by integration local hidden variable models (called C-LHVMs) rather than by summation LHVMs (called D-LHVMs). Obviously, C-LHVMs are easier to construct than D-LHVMs, while the latter are easier to use than the former, e.g., in discussing the topological and geometric properties of the sets of all bilocal and of all n-local CTs. In this context, one may ask whether the two descriptions are equivalent. In the present work, we first establish some equivalent characterizations of bilocality of a tripartite CT ${\bf P}=[P(abc|xyz)]$, implying that the two descriptions of bilocality are equivalent. As applications, we prove that all bilocal CTs of the same size form a compact, path-connected set that has many star-convex subsets. Secondly, we introduce and discuss the bilocality of a tripartite probability tensor (PT) ${\bf P}=[P(abc)]$, including equivalent characterizations and properties of bilocal PTs. Lastly, we obtain corresponding results about n-locality of (n+1)-partite CTs ${\bf P}=[P({\bf a}b|{\bf x}y)]$ and PTs ${\bf P}=[P({\bf a}b)]$, respectively.
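    For reference, the integral (C-LHVM) form of bilocality in the independent-two-source scenario is, up to notation, the standard decomposition below; the summation (D-LHVM) form replaces the integrals by finite sums. This is the textbook definition from the Bell-nonlocality literature, not a formula quoted from the paper, and the symbols are assumed.

```latex
% Bilocal (C-LHVM) decomposition of a tripartite correlation tensor
% P = [P(abc|xyz)]: the two hidden variables lambda_1, lambda_2 are independent.
\begin{equation*}
  P(abc|xyz) \;=\; \iint
    q_1(\lambda_1)\, q_2(\lambda_2)\,
    P_A(a|x,\lambda_1)\, P_B(b|y,\lambda_1,\lambda_2)\, P_C(c|z,\lambda_2)\,
    \mathrm{d}\lambda_1\, \mathrm{d}\lambda_2,
\end{equation*}
% where q_1 and q_2 are probability densities; a D-LHVM replaces the integrals
% by finite sums over discrete hidden variables.
```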

    Privacy-Preserving Distributed SVD via Federated Power

    Singular value decomposition (SVD) is one of the most fundamental tools in machine learning and statistics. The modern machine learning community usually assumes that data come from and belong to small-scale device users. The low communication and computation power of such devices, and the possible privacy breaches of users' sensitive data, make the computation of SVD challenging. Federated learning (FL) is a paradigm that enables a large number of devices to jointly learn a model in a communication-efficient way without data sharing. In the FL framework, we develop a class of algorithms called FedPower for the computation of partial SVD in this modern setting. Based on the well-known power method, the local devices alternate between multiple local power iterations and one global aggregation to improve communication efficiency. In the aggregation, we propose to weight each local eigenvector matrix with an Orthogonal Procrustes Transformation (OPT). Considering the practical straggler effect, the aggregation can be fully or partially participated, and for the latter we propose two sampling and aggregation schemes. Further, to ensure strong privacy protection, we add Gaussian noise whenever communication happens, adopting the notion of differential privacy (DP). We theoretically show the convergence bound for FedPower. The resulting bound is interpretable, with each part corresponding to the effect of Gaussian noise, parallelization, and random sampling of devices, respectively. We also conduct experiments to demonstrate the merits of FedPower. In particular, the local iterations not only improve communication efficiency but also reduce the chance of privacy breaches.
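    A minimal sketch of the overall pattern the abstract describes (local power iterations, Procrustes alignment before averaging, Gaussian noise at communication time) is shown below. The function name, parameters, sample-count weighting, and noise scale are illustrative assumptions; the paper's exact update rules, participation sampling, and DP calibration may differ.

```python
import numpy as np

def federated_power(local_data, k, rounds=20, local_iters=5, noise_std=0.0, seed=0):
    """Sketch of a FedPower-style federated power method (illustrative only).
    Each device holds A_i (n_i x d); together they approximate the top-k right
    singular subspace of the stacked data without sharing raw rows."""
    rng = np.random.default_rng(seed)
    d = local_data[0].shape[1]
    # Shared initial orthonormal basis (d x k).
    Z, _ = np.linalg.qr(rng.standard_normal((d, k)))

    for _ in range(rounds):
        local_bases = []
        for A in local_data:
            Zi = Z.copy()
            for _ in range(local_iters):
                # Local power iteration with the device's Gram matrix A^T A.
                Zi, _ = np.linalg.qr(A.T @ (A @ Zi))
            local_bases.append(Zi)

        # Orthogonal Procrustes alignment: rotate each local basis toward a
        # reference basis so that sign/rotation ambiguities do not cancel out.
        ref = local_bases[0]
        aligned = []
        for Zi, A in zip(local_bases, local_data):
            U, _, Vt = np.linalg.svd(Zi.T @ ref)
            aligned.append(A.shape[0] * (Zi @ (U @ Vt)))  # weight by local sample count

        agg = sum(aligned) / sum(A.shape[0] for A in local_data)
        # Gaussian noise added at communication time, in the spirit of DP
        # (the noise scale here is a free parameter, not the paper's calibration).
        agg = agg + noise_std * rng.standard_normal(agg.shape)
        Z, _ = np.linalg.qr(agg)

    return Z  # approximate top-k right singular subspace
```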

    Quantum multipartite maskers vs quantum error-correcting codes

    Since the masking of quantum information was introduced by Modi et al. in [PRL 120, 230501 (2018)], many discussions on this topic have been published. In this paper, we consider the relationship between quantum multipartite maskers (QMMs) and quantum error-correcting codes (QECCs). We say that a subset $Q$ of pure states of a system $K$ can be masked by an operator $S$ into a multipartite system $\mathcal{H}^{(n)}$ if all of the image states $S|\psi\rangle$ of states $|\psi\rangle$ in $Q$ have the same marginal states on each subsystem. We call such an $S$ a QMM of $Q$. By establishing an expression for a QMM, we obtain a relationship between QMMs and QECCs, which reads that an isometry is a QMM of all pure states of a system if and only if its range is a QECC for any one-erasure channel. As an application, we prove that there is no isometric universal masker from $\mathbb{C}^2$ into $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$, and hence the states of $\mathbb{C}^3$ cannot be masked isometrically into $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$. This completes a main result and leads to a negative answer to an open question in [PRA 98, 062306 (2018)]. Another application is that arbitrary quantum states of $\mathbb{C}^d$ can be completely hidden in correlations between any two subsystems of the tripartite system $\mathbb{C}^{d+1}\otimes\mathbb{C}^{d+1}\otimes\mathbb{C}^{d+1}$, while arbitrary quantum states cannot be completely hidden in the correlations between subsystems of a bipartite system [PRL 98, 080502 (2007)]. Comment: This is a revision of arXiv:2004.14540. In the present version, $k$ and $j$ in the old Eq. (2.2) have been exchanged and the following three equations have been corrected.
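    In symbols, the masking condition stated in the abstract (equal marginals of every image state on each subsystem) can be written as follows; the notation $\operatorname{Tr}_{\widehat{k}}$ and $\rho_k$ is ours, not the paper's.

```latex
% S masks the set Q into H^{(n)} = H_1 (x) ... (x) H_n if every single-subsystem
% reduced state of S|psi> is the same for all |psi> in Q.
\begin{equation*}
  \operatorname{Tr}_{\widehat{k}}\!\left( S|\psi\rangle\langle\psi| S^{\dagger} \right)
  \;=\; \rho_k
  \qquad \text{for all } |\psi\rangle \in Q,\ k = 1,\dots,n,
\end{equation*}
% where Tr_{\hat{k}} traces out every subsystem except the k-th, and each rho_k
% is a fixed state independent of |psi>.
```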